AllLife Bank is a US bank that has a growing customer base. The majority of these customers are liability customers (depositors) with varying sizes of deposits. The number of customers who are also borrowers (asset customers) is quite small, and the bank is interested in expanding this base rapidly to bring in more loan business and in the process, earn more through the interest on loans. In particular, the management wants to explore ways of converting its liability customers to personal loan customers (while retaining them as depositors).
A campaign that the bank ran last year for liability customers showed a healthy conversion rate of over 9% success. This has encouraged the retail marketing department to devise campaigns with better target marketing to increase the success ratio.
You as a Data scientist at AllLife bank have to build a model that will help the marketing department to identify the potential customers who have a higher probability of purchasing the loan.
To predict whether a liability customer will buy personal loans, to understand which customer attributes are most significant in driving purchases, and identify which segment of customers to target more.
ID: Customer IDAge: Customer’s age in completed yearsExperience: #years of professional experienceIncome: Annual income of the customer (in thousand dollars)ZIP Code: Home Address ZIP code.Family: the Family size of the customerCCAvg: Average spending on credit cards per month (in thousand dollars)Education: Education Level. 1: Undergrad; 2: Graduate;3: Advanced/ProfessionalMortgage: Value of house mortgage if any. (in thousand dollars)Personal_Loan: Did this customer accept the personal loan offered in the last campaign? (0: No, 1: Yes)Securities_Account: Does the customer have securities account with the bank? (0: No, 1: Yes)CD_Account: Does the customer have a certificate of deposit (CD) account with the bank? (0: No, 1: Yes)Online: Do customers use internet banking facilities? (0: No, 1: Yes)CreditCard: Does the customer use a credit card issued by any other Bank (excluding All life Bank)? (0: No, 1: Yes)# Installing the libraries with the specified version.
!pip install numpy==1.25.2 pandas==1.5.3 matplotlib==3.7.1 seaborn==0.13.1 scikit-learn==1.2.2 sklearn-pandas==2.2.0 -q --user
Note:
After running the above cell, kindly restart the notebook kernel (for Jupyter Notebook) or runtime (for Google Colab), write the relevant code for the project from the next cell, and run all cells sequentially from the next cell.
On executing the above line of code, you might see a warning regarding package dependencies. This error message can be ignored as the above code ensures that all necessary libraries and their dependencies are maintained to successfully execute the code in this notebook.
# Libraries to help with reading and manipulating data
import pandas as pd
import numpy as np
# Libraries to help with data visualization
import matplotlib.pyplot as plt
import seaborn as sns
# Library to split data and model validation
from sklearn.model_selection import train_test_split, cross_val_score, learning_curve
# To build model for prediction
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
# To get different metric scores
from sklearn.metrics import (
f1_score,
accuracy_score,
recall_score,
precision_score,
confusion_matrix,
classification_report
)
# To suppress unnecessary warnings
import warnings
warnings.filterwarnings("ignore")
# Set style for better visualizations
plt.style.use('default')
sns.set_palette("husl")
print("✅ All libraries imported successfully!")
print("📊 Ready for enhanced business analysis")
# uncomment the following lines if Google Colab is being used
# from google.colab import drive
# drive.mount('/content/drive')
# Load the dataset
Loan = pd.read_csv("Loan_Modelling.csv")
# Create a copy to avoid changes to original data
data = Loan.copy()
print("✅ Dataset loaded successfully!")
print(f"📊 Dataset shape: {data.shape}")
print(f"🎯 Analysis ready to begin")
data.head() # View the first 5 rows of the dataset.
data.tail() # View the last 5 rows of the dataset.
data.shape # To get the shape of the data
data.size # To get size of the data
data.info() # To check the dataTypes
data.describe().T # To get the statistical summary of the data
data.duplicated().sum()
data.nunique()
# Comprehensive Data Quality Report
def comprehensive_data_quality_report(df):
"""
Generate comprehensive data quality report with business insights
"""
print("=== 📋 COMPREHENSIVE DATA QUALITY REPORT ===")
print(f"📊 Dataset Shape: {df.shape}")
print(f"📈 Total Records: {len(df):,}")
print(f"🔢 Total Features: {df.shape[1]}")
print(f"🎯 Target Variable: Personal_Loan")
# Target distribution analysis
if 'Personal_Loan' in df.columns:
target_dist = df['Personal_Loan'].value_counts(normalize=True)
print(f"\n⚖️ CLASS BALANCE ANALYSIS:")
print(f" 📉 No Loan (0): {target_dist[0]:.1%} ({df['Personal_Loan'].value_counts()[0]:,} customers)")
print(f" 📈 Loan Accepted (1): {target_dist[1]:.1%} ({df['Personal_Loan'].value_counts()[1]:,} customers)")
print(f" ⚠️ Class Imbalance Ratio: {target_dist[0]/target_dist[1]:.1f}:1")
if target_dist[1] < 0.15:
print(f" 🚨 ALERT: Significant class imbalance detected - consider balancing techniques")
# Data quality checks
print(f"\n🔍 DATA QUALITY CHECKS:")
missing_values = df.isnull().sum().sum()
duplicate_rows = df.duplicated().sum()
print(f" ❌ Missing Values: {missing_values}")
print(f" 🔄 Duplicate Records: {duplicate_rows}")
# Data type analysis
print(f"\n📋 DATA TYPE ANALYSIS:")
numerical_cols = df.select_dtypes(include=[np.number]).shape[1]
categorical_cols = df.select_dtypes(include=['object']).shape[1]
print(f" 🔢 Numerical Features: {numerical_cols}")
print(f" 📝 Categorical Features: {categorical_cols}")
# Anomaly detection
print(f"\n🚨 ANOMALY DETECTION:")
anomalies_found = False
if 'Experience' in df.columns:
negative_exp = (df['Experience'] < 0).sum()
if negative_exp > 0:
print(f" ⚠️ Negative Experience Values: {negative_exp} records")
anomalies_found = True
if 'Age' in df.columns and 'Experience' in df.columns:
age_exp_issues = (df['Experience'] > df['Age']).sum()
if age_exp_issues > 0:
print(f" ⚠️ Experience > Age: {age_exp_issues} records")
anomalies_found = True
if not anomalies_found:
print(f" ✅ No major anomalies detected")
# High cardinality features
print(f"\n🎯 HIGH CARDINALITY FEATURES:")
unique_counts = df.nunique().sort_values(ascending=False)
high_cardinality = unique_counts[unique_counts > 100]
if len(high_cardinality) > 0:
for feature, count in high_cardinality.items():
print(f" 📊 {feature}: {count:,} unique values")
if count > 1000:
print(f" 🚨 Consider dropping or grouping {feature} (too many categories)")
else:
print(f" ✅ No high cardinality issues detected")
return df.describe()
# Generate comprehensive quality report
summary_stats = comprehensive_data_quality_report(data)
print("\n=== 📈 STATISTICAL SUMMARY ===")
display(summary_stats.round(2))
✅ Strengths:
⚠️ Areas for Attention:
🎯 Business Implications:
The dataset contains 5,000 rows and 14 columns, representing customer demographics and financial behavior.
No missing data or duplicate entries were found, which simplifies preprocessing.
Most variables are numerical or binary, and do not require extensive encoding (except Education and ZIPCode).
The target variable Personal_Loan is binary, suitable for classification models.
ZIPCode has many unique values (high cardinality), which may not be useful without grouping or dropping.
Experience is nearly identical to Age - X (perfectly correlated) — consider dropping it to avoid multicollinearity.
Let's analyze customer segments that are most likely to accept loans, focusing on actionable business insights.
# Business-Focused Customer Segmentation Analysis
def create_business_segments_analysis(df):
"""
Create comprehensive customer segmentation visualizations with business focus
"""
fig, axes = plt.subplots(2, 2, figsize=(16, 12))
fig.suptitle('🎯 Customer Segmentation Analysis for Loan Targeting Strategy', fontsize=16, fontweight='bold')
# 1. Income vs Loan Acceptance with business thresholds
sns.boxplot(data=df, x='Personal_Loan', y='Income', ax=axes[0,0])
axes[0,0].axhline(y=100, color='red', linestyle='--', linewidth=2, label='High Income Threshold ($100k)')
axes[0,0].set_title('💰 Income Distribution by Loan Acceptance\n(Target: >$100k customers)', fontweight='bold')
axes[0,0].set_xlabel('Loan Accepted (0=No, 1=Yes)')
axes[0,0].set_ylabel('Annual Income ($000s)')
axes[0,0].legend()
# Add median income annotations
median_no_loan = df[df['Personal_Loan']==0]['Income'].median()
median_loan = df[df['Personal_Loan']==1]['Income'].median()
axes[0,0].text(0, median_no_loan + 10, f'Median: ${median_no_loan:.0f}k', ha='center', va='bottom',
fontweight='bold', bbox=dict(boxstyle='round', facecolor='lightblue', alpha=0.7))
axes[0,0].text(1, median_loan + 10, f'Median: ${median_loan:.0f}k', ha='center', va='bottom',
fontweight='bold', bbox=dict(boxstyle='round', facecolor='lightgreen', alpha=0.7))
# 2. Education Level Impact with business insights
education_loan = df.groupby('Education')['Personal_Loan'].agg(['mean', 'count']).reset_index()
education_labels = {1: 'Undergrad', 2: 'Graduate', 3: 'Advanced/Prof'}
education_loan['Education_Label'] = education_loan['Education'].map(education_labels)
bars = axes[0,1].bar(education_loan['Education_Label'], education_loan['mean'] * 100,
color=['lightcoral', 'gold', 'lightgreen'])
axes[0,1].set_title('🎓 Loan Acceptance Rate by Education Level\n(Higher education = Higher acceptance)', fontweight='bold')
axes[0,1].set_ylabel('Loan Acceptance Rate (%)')
axes[0,1].set_xlabel('Education Level')
# Add percentage labels on bars
for i, bar in enumerate(bars):
height = bar.get_height()
axes[0,1].text(bar.get_x() + bar.get_width()/2., height + 0.5,
f'{height:.1f}%\n(n={education_loan.iloc[i]["count"]:,})',
ha='center', va='bottom', fontweight='bold')
# 3. Age Group Analysis with business targeting
df_temp = df.copy()
df_temp['Age_Group'] = pd.cut(df_temp['Age'], bins=[0, 30, 40, 50, 100],
labels=['<30', '30-40', '40-50', '50+'])
age_loan = df_temp.groupby('Age_Group')['Personal_Loan'].agg(['mean', 'count']).reset_index()
bars = axes[1,0].bar(age_loan['Age_Group'], age_loan['mean'] * 100,
color=['lightblue', 'gold', 'lightcoral', 'lightgray'])
axes[1,0].set_title('👥 Loan Acceptance Rate by Age Group\n(30-40 age group shows highest acceptance)', fontweight='bold')
axes[1,0].set_ylabel('Loan Acceptance Rate (%)')
axes[1,0].set_xlabel('Age Group')
# Add percentage labels
for i, bar in enumerate(bars):
height = bar.get_height()
axes[1,0].text(bar.get_x() + bar.get_width()/2., height + 0.2,
f'{height:.1f}%\n(n={age_loan.iloc[i]["count"]:,})',
ha='center', va='bottom', fontweight='bold')
# 4. Credit Card Spending vs Loan Acceptance with business thresholds
df_temp['CCAvg_Category'] = pd.cut(df_temp['CCAvg'],
bins=[0, 1, 2.5, 5, float('inf')],
labels=['Low (<$1k)', 'Medium ($1-2.5k)', 'High ($2.5-5k)', 'Premium (>$5k)'])
cc_loan = df_temp.groupby('CCAvg_Category')['Personal_Loan'].agg(['mean', 'count']).reset_index()
bars = axes[1,1].bar(range(len(cc_loan)), cc_loan['mean'] * 100,
color=['lightcoral', 'gold', 'lightgreen', 'darkgreen'])
axes[1,1].set_title('💳 Loan Acceptance by Credit Card Spending\n(Higher spending = Higher acceptance)', fontweight='bold')
axes[1,1].set_ylabel('Loan Acceptance Rate (%)')
axes[1,1].set_xlabel('Monthly Credit Card Spending')
axes[1,1].set_xticks(range(len(cc_loan)))
axes[1,1].set_xticklabels(cc_loan['CCAvg_Category'], rotation=45)
# Add percentage labels
for i, bar in enumerate(bars):
height = bar.get_height()
axes[1,1].text(bar.get_x() + bar.get_width()/2., height + 1,
f'{height:.1f}%\n(n={cc_loan.iloc[i]["count"]:,})',
ha='center', va='bottom', fontweight='bold')
plt.tight_layout()
plt.show()
# Print key business insights
print("\n=== 🎯 KEY BUSINESS INSIGHTS ===")
print(f"💰 Income Impact: Customers with loans have median income of ${median_loan:.0f}k vs ${median_no_loan:.0f}k")
print(f"🎓 Education Effect: Advanced degree holders are {education_loan.iloc[2]['mean']/education_loan.iloc[0]['mean']:.1f}x more likely to accept loans")
# Age group insights
best_age_group = age_loan.loc[age_loan['mean'].idxmax(), 'Age_Group']
best_age_rate = age_loan['mean'].max() * 100
print(f"👥 Optimal Age Group: {best_age_group} with {best_age_rate:.1f}% acceptance rate")
# Credit card spending insights
if len(cc_loan) > 1:
cc_improvement = (cc_loan.iloc[-1]['mean'] / cc_loan.iloc[0]['mean']) if cc_loan.iloc[0]['mean'] > 0 else 0
print(f"💳 Spending Impact: Premium spenders are {cc_improvement:.1f}x more likely to accept loans")
return df_temp
# Create enhanced business-focused visualizations
data_with_segments = create_business_segments_analysis(data)
Analyzing existing bank product relationships to identify cross-selling opportunities.
# Cross-selling Opportunity Analysis
def analyze_cross_selling_opportunities(df):
"""
Analyze cross-selling opportunities with existing bank products
"""
fig, axes = plt.subplots(1, 3, figsize=(18, 6))
fig.suptitle('🏦 Cross-Selling Opportunities Analysis', fontsize=16, fontweight='bold')
# CD Account Impact
cd_analysis = df.groupby('CD_Account')['Personal_Loan'].agg(['mean', 'count']).reset_index()
cd_labels = ['No CD Account', 'Has CD Account']
bars1 = axes[0].bar(cd_labels, cd_analysis['mean'] * 100, color=['lightcoral', 'lightgreen'])
axes[0].set_title('💎 CD Account Holders vs Loan Acceptance\n(CD customers show higher conversion)', fontweight='bold')
axes[0].set_ylabel('Loan Acceptance Rate (%)')
for i, bar in enumerate(bars1):
height = bar.get_height()
axes[0].text(bar.get_x() + bar.get_width()/2., height + 1,
f'{height:.1f}%\n(n={cd_analysis.iloc[i]["count"]:,})',
ha='center', va='bottom', fontweight='bold')
# Securities Account Impact
sec_analysis = df.groupby('Securities_Account')['Personal_Loan'].agg(['mean', 'count']).reset_index()
sec_labels = ['No Securities', 'Has Securities']
bars2 = axes[1].bar(sec_labels, sec_analysis['mean'] * 100, color=['lightcoral', 'lightblue'])
axes[1].set_title('📈 Securities Account Impact\n(Investment customers show higher acceptance)', fontweight='bold')
axes[1].set_ylabel('Loan Acceptance Rate (%)')
for i, bar in enumerate(bars2):
height = bar.get_height()
axes[1].text(bar.get_x() + bar.get_width()/2., height + 1,
f'{height:.1f}%\n(n={sec_analysis.iloc[i]["count"]:,})',
ha='center', va='bottom', fontweight='bold')
# Combined Product Holdings
df['Product_Portfolio'] = df['CD_Account'] + df['Securities_Account']
portfolio_labels = {0: 'No Investment\nProducts', 1: 'One Investment\nProduct', 2: 'Both Investment\nProducts'}
portfolio_analysis = df.groupby('Product_Portfolio')['Personal_Loan'].agg(['mean', 'count']).reset_index()
portfolio_analysis['Portfolio_Label'] = portfolio_analysis['Product_Portfolio'].map(portfolio_labels)
bars3 = axes[2].bar(portfolio_analysis['Portfolio_Label'], portfolio_analysis['mean'] * 100,
color=['lightcoral', 'gold', 'lightgreen'])
axes[2].set_title('🎯 Investment Product Portfolio Impact\n(More products = Higher loan acceptance)', fontweight='bold')
axes[2].set_ylabel('Loan Acceptance Rate (%)')
axes[2].tick_params(axis='x', rotation=0)
for i, bar in enumerate(bars3):
height = bar.get_height()
axes[2].text(bar.get_x() + bar.get_width()/2., height + 1,
f'{height:.1f}%\n(n={portfolio_analysis.iloc[i]["count"]:,})',
ha='center', va='bottom', fontweight='bold')
plt.tight_layout()
plt.show()
# Print key cross-selling insights
print("\n=== 🎯 KEY CROSS-SELLING INSIGHTS ===")
if len(cd_analysis) > 1:
cd_lift = (cd_analysis.iloc[1]['mean'] / cd_analysis.iloc[0]['mean'] - 1) * 100
print(f"💎 CD Account holders are {cd_lift:.0f}% more likely to accept loans")
if len(sec_analysis) > 1:
sec_lift = (sec_analysis.iloc[1]['mean'] / sec_analysis.iloc[0]['mean'] - 1) * 100
print(f"📈 Securities Account holders are {sec_lift:.0f}% more likely to accept loans")
if len(portfolio_analysis) > 1:
portfolio_lift = (portfolio_analysis.iloc[-1]['mean'] / portfolio_analysis.iloc[0]['mean'] - 1) * 100
print(f"🎯 Customers with both investment products are {portfolio_lift:.0f}% more likely to accept loans")
print(f"\n💡 BUSINESS RECOMMENDATION:")
print(f" 🎯 Prioritize marketing to existing CD and Securities account holders")
print(f" 📞 Implement cross-selling triggers when customers open investment accounts")
print(f" 💰 Focus premium loan products on multi-product customers")
# Run cross-selling analysis
analyze_cross_selling_opportunities(data)
data = data.drop('ID', axis=1) # The code to drop a column from the dataframe
data["Experience"].unique()
# checking for experience <0
data[data["Experience"] < 0]["Experience"].unique()
# Correcting the experience values
data["Experience"].replace(-1, 1, inplace=True)
data["Experience"].replace(-2, 2, inplace=True)
data["Experience"].replace(-3, 3, inplace=True)
data["Education"].unique()
# checking the number of uniques in the zip code
data["ZIPCode"].nunique()
data["ZIPCode"] = data["ZIPCode"].astype(str)
print(
"Number of unique values if we take first two digits of ZIPCode: ",
data["ZIPCode"].str[0:2].nunique(),
)
data["ZIPCode"] = data["ZIPCode"].str[0:2]
data["ZIPCode"] = data["ZIPCode"].astype("category")
# Converting the data type of categorical features to 'category'
cat_cols = [
"Education",
"Personal_Loan",
"Securities_Account",
"CD_Account",
"Online",
"CreditCard",
"ZIPCode",
]
data[cat_cols] = data[cat_cols].astype("category")
def histogram_boxplot(data, feature, figsize=(12, 7), kde=False, bins=None):
"""
Boxplot and histogram combined
data: dataframe
feature: dataframe column
figsize: size of figure (default (12,7))
kde: whether to show the density curve (default False)
bins: number of bins for histogram (default None)
"""
f2, (ax_box2, ax_hist2) = plt.subplots(
nrows=2, # Number of rows of the subplot grid= 2
sharex=True, # x-axis will be shared among all subplots
gridspec_kw={"height_ratios": (0.25, 0.75)},
figsize=figsize,
) # creating the 2 subplots
sns.boxplot(
data=data, x=feature, ax=ax_box2, showmeans=True, color="violet"
) # boxplot will be created and a star will indicate the mean value of the column
sns.histplot(
data=data, x=feature, kde=kde, ax=ax_hist2, bins=bins, palette="winter"
) if bins else sns.histplot(
data=data, x=feature, kde=kde, ax=ax_hist2
) # For histogram
ax_hist2.axvline(
data[feature].mean(), color="green", linestyle="--"
) # Add mean to the histogram
ax_hist2.axvline(
data[feature].median(), color="black", linestyle="-"
) # Add median to the histogram
# function to create labeled barplots
def labeled_barplot(data, feature, perc=False, n=None):
"""
Barplot with percentage at the top
data: dataframe
feature: dataframe column
perc: whether to display percentages instead of count (default is False)
n: displays the top n category levels (default is None, i.e., display all levels)
"""
total = len(data[feature]) # length of the column
count = data[feature].nunique()
if n is None:
plt.figure(figsize=(count + 1, 5))
else:
plt.figure(figsize=(n + 1, 5))
plt.xticks(rotation=90, fontsize=15)
ax = sns.countplot(
data=data,
x=feature,
palette="Paired",
order=data[feature].value_counts().index[:n].sort_values(),
)
for p in ax.patches:
if perc == True:
label = "{:.1f}%".format(
100 * p.get_height() / total
) # percentage of each class of the category
else:
label = p.get_height() # count of each level of the category
x = p.get_x() + p.get_width() / 2 # width of the plot
y = p.get_height() # height of the plot
ax.annotate(
label,
(x, y),
ha="center",
va="center",
size=12,
xytext=(0, 5),
textcoords="offset points",
) # annotate the percentage
plt.show() # show the plot
histogram_boxplot(data, "Age")
histogram_boxplot(data, "Experience") # The code to create histogram_boxplot for experience
histogram_boxplot(data, "Income") # The code to create histogram_boxplot for Income
histogram_boxplot(data, "CCAvg") # The code to create histogram_boxplot for CCAvg
histogram_boxplot(data, "Mortgage") # The code to create histogram_boxplot for Mortgage
labeled_barplot(data, "Family", perc=True)
labeled_barplot(data, "Education", perc=True) # The code to create labeled_barplot for Education
labeled_barplot(data, "Securities_Account", perc=True ) # The code to create labeled_barplot for Securities_Account
labeled_barplot(data, "CD_Account", perc=True) # The code to create labeled_barplot for CD_Account
labeled_barplot(data, "Online", perc=True) # The code to create labeled_barplot for Online
labeled_barplot(data, "ZIPCode", perc=True) # The code to create labeled_barplot for ZIPCode
labeled_barplot(data, "CreditCard", perc=True) # The code to create labeled_barplot for CreditCard
Most numerical variables such as Income, CCAvg, and Mortgage show right-skewed distributions, indicating that most customers have low values with a few having very high values.
Categorical variables like Family, Education, CreditCard, and CD_Account exhibit imbalanced distributions — for example, most customers belong to smaller families or have no credit card.
The target variable Personal_Loan is skewed toward 0 (non-loan customers), highlighting a class imbalance problem.
def stacked_barplot(data, predictor, target):
"""
Print the category counts and plot a stacked bar chart
data: dataframe
predictor: independent variable
target: target variable
"""
count = data[predictor].nunique()
sorter = data[target].value_counts().index[-1]
tab1 = pd.crosstab(data[predictor], data[target], margins=True).sort_values(
by=sorter, ascending=False
)
print(tab1)
print("-" * 120)
tab = pd.crosstab(data[predictor], data[target], normalize="index").sort_values(
by=sorter, ascending=False
)
tab.plot(kind="bar", stacked=True, figsize=(count + 5, 5))
plt.legend(
loc="lower left", frameon=False,
)
plt.legend(loc="upper left", bbox_to_anchor=(1, 1))
plt.show()
### function to plot distributions wrt target
def distribution_plot_wrt_target(data, predictor, target):
fig, axs = plt.subplots(2, 2, figsize=(12, 10))
target_uniq = data[target].unique()
axs[0, 0].set_title("Distribution of target for target=" + str(target_uniq[0]))
sns.histplot(
data=data[data[target] == target_uniq[0]],
x=predictor,
kde=True,
ax=axs[0, 0],
color="teal",
stat="density",
)
axs[0, 1].set_title("Distribution of target for target=" + str(target_uniq[1]))
sns.histplot(
data=data[data[target] == target_uniq[1]],
x=predictor,
kde=True,
ax=axs[0, 1],
color="orange",
stat="density",
)
axs[1, 0].set_title("Boxplot w.r.t target")
sns.boxplot(data=data, x=target, y=predictor, ax=axs[1, 0], palette="gist_rainbow")
axs[1, 1].set_title("Boxplot (without outliers) w.r.t target")
sns.boxplot(
data=data,
x=target,
y=predictor,
ax=axs[1, 1],
showfliers=False,
palette="gist_rainbow",
)
plt.tight_layout()
plt.show()### function to plot distributions wrt target
def distribution_plot_wrt_target(data, predictor, target):
fig, axs = plt.subplots(2, 2, figsize=(12, 10))
target_uniq = data[target].unique()
axs[0, 0].set_title("Distribution of target for target=" + str(target_uniq[0]))
sns.histplot(
data=data[data[target] == target_uniq[0]],
x=predictor,
kde=True,
ax=axs[0, 0],
color="teal",
stat="density",
)
axs[0, 1].set_title("Distribution of target for target=" + str(target_uniq[1]))
sns.histplot(
data=data[data[target] == target_uniq[1]],
x=predictor,
kde=True,
ax=axs[0, 1],
color="orange",
stat="density",
)
axs[1, 0].set_title("Boxplot w.r.t target")
sns.boxplot(data=data, x=target, y=predictor, ax=axs[1, 0], palette="gist_rainbow")
axs[1, 1].set_title("Boxplot (without outliers) w.r.t target")
sns.boxplot(
data=data,
x=target,
y=predictor,
ax=axs[1, 1],
showfliers=False,
palette="gist_rainbow",
)
plt.tight_layout()
plt.show()
plt.figure(figsize=(15, 7))
sns.heatmap(data.corr(numeric_only=True), annot=True, vmin=-1, vmax=1, fmt=".2f", cmap="Spectral") # The code to get the heatmap of the data
plt.show()
stacked_barplot(data, "Education", "Personal_Loan")
stacked_barplot(data,"Family", "Personal_Loan") ## The code to plot stacked barplot for Personal Loan and Family
stacked_barplot(data,"Securities_Account", "Personal_Loan") # The code to plot stacked barplot for Personal Loan and Securities_Account
stacked_barplot(data,"CD_Account", "Personal_Loan") # The code to plot stacked barplot for Personal Loan and CD_Account
stacked_barplot(data,"Online", "Personal_Loan") # The code to plot stacked barplot for Personal Loan and Online
stacked_barplot(data,"CreditCard", "Personal_Loan") # The code to plot stacked barplot for Personal Loan and CreditCard
stacked_barplot(data,"ZIPCode", "Personal_Loan") # The code to plot stacked barplot for Personal Loan and ZIPCode
distribution_plot_wrt_target(data, "Age", "Personal_Loan")
distribution_plot_wrt_target(data, "Experience", "Personal_Loan") # The code to plot stacked barplot for Personal Loan and Experience
distribution_plot_wrt_target(data, "Income", "Personal_Loan") # The code to plot stacked barplot for Personal Loan and Income
distribution_plot_wrt_target(data, "CCAvg", "Personal_Loan") # The code to plot stacked barplot for Personal Loan and CCAvg
Q1 = data.select_dtypes(include=["float64", "int64"]).quantile(0.25) # To find the 25th percentile and 75th percentile.
Q3 = data.select_dtypes(include=["float64", "int64"]).quantile(0.75)
IQR = Q3 - Q1 # Inter Quantile Range (75th perentile - 25th percentile)
lower = (
Q1 - 1.5 * IQR
) # Finding lower and upper bounds for all values. All values outside these bounds are outliers
upper = Q3 + 1.5 * IQR
(
(data.select_dtypes(include=["float64", "int64"]) < lower)
| (data.select_dtypes(include=["float64", "int64"]) > upper)
).sum() / len(data) * 100
plt.figure(figsize=(15,15))
# sns.pairplot(loan, diag_kind='kde')
sns.pairplot(data, hue="Personal_Loan")
plt.show()
Income and CCAvg: Customers with higher income and higher average credit card spending are more likely to take personal loans.
Education: Loan acceptance increases with education level, especially among those with graduate or advanced degrees.
Age: Loan uptake is higher in the 30–40 age group, and decreases for older and very young customers.
CD_Account and Securities_Account: Customers with existing CD accounts are significantly more likely to accept a loan.
Outliers were detected in numeric variables like Income, CCAvg, and Mortgage, especially on the higher end.
These outliers may impact model performance by skewing the decision boundaries, especially in algorithms sensitive to scale like logistic regression or k-NN.
Depending on the modeling technique, consider capping, log-transforming, or removing extreme outliers for robustness.
The distribution of key numeric features such as Income, CCAvg, and Mortgage is right-skewed, indicating a concentration of customers at lower values with a few high-value outliers.
A large proportion of customers do not own a mortgage or a credit card, which may influence their loan approval likelihood.
The Personal_Loan target variable is imbalanced, with relatively fewer customers opting for a personal loan.
Features like Income, CCAvg, and CD_Account exhibit a strong relationship with loan purchase decisions.
Higher education levels and middle-aged customers (30–40 years) tend to show greater interest in personal loans, revealing potential customer segments.
Questions: What is the distribution of mortgage attribute? Are there any noticeable patterns or outliers in the distribution?
Observation: Most customers have a Mortgage of 0, indicating many don't have existing mortgages. However, there are some outliers with very high mortgage values, which could skew model performance if not handled properly.
Questions:
Observation: A significant portion of customers do not have credit cards. The ratio is important when considering cross-sell opportunities or risk profiling.
Questions: What are the attributes that have a strong correlation with the target attribute (personal loan)?
Observation: Attributes like Income, CCAvg, CD_Account, and Education show a positive correlation with Personal_Loan. These are key candidates for model features.
Questions: How does a customer's interest in purchasing a loan vary with their age?
Observation: Loan interest peaks around ages 30–40. Younger and much older customers are less likely to opt for personal loans.
Questions: How does a customer's interest in purchasing a loan vary with their education?
Observation: Customers with higher education levels (2 and 3) are more likely to purchase loans, possibly due to better income profiles or financial literacy.
# dropping Experience as it is perfectly correlated with Age
X = data.drop(["Personal_Loan", "Experience"], axis=1)
Y = data["Personal_Loan"]
X = pd.get_dummies(X, columns=["ZIPCode", "Education"], drop_first=True)
X = X.astype(float)
# Splitting data in train and test sets
X_train, X_test, y_train, y_test = train_test_split(
X, Y, test_size=0.30, random_state=1
)
print("Shape of Training set : ", X_train.shape)
print("Shape of test set : ", X_test.shape)
print("Percentage of classes in training set:")
print(y_train.value_counts(normalize=True))
print("Percentage of classes in test set:")
print(y_test.value_counts(normalize=True))
Creating comprehensive evaluation functions that include both technical and business metrics.
# Enhanced Model Performance Evaluation Function
def evaluate_model_performance_comprehensive(model, X_test, y_test, model_name="Model"):
"""
Comprehensive model evaluation with business metrics
"""
predictions = model.predict(X_test)
probabilities = model.predict_proba(X_test)[:, 1] if hasattr(model, 'predict_proba') else None
# Calculate standard metrics
accuracy = accuracy_score(y_test, predictions)
precision = precision_score(y_test, predictions)
recall = recall_score(y_test, predictions)
f1 = f1_score(y_test, predictions)
# Business metrics
tn, fp, fn, tp = confusion_matrix(y_test, predictions).ravel()
# Calculate business impact metrics
total_customers = len(y_test)
actual_loan_customers = sum(y_test)
predicted_loan_customers = sum(predictions)
# Marketing efficiency metrics
marketing_efficiency = tp / predicted_loan_customers if predicted_loan_customers > 0 else 0
customer_coverage = tp / actual_loan_customers if actual_loan_customers > 0 else 0
# Cost-benefit analysis (assuming $50 cost per targeted customer, $2000 profit per loan)
cost_per_customer = 50
profit_per_loan = 2000
marketing_cost = predicted_loan_customers * cost_per_customer
revenue_generated = tp * profit_per_loan
net_profit = revenue_generated - marketing_cost
roi_percentage = (net_profit / marketing_cost * 100) if marketing_cost > 0 else 0
results = {
'Model': model_name,
'Accuracy': f"{accuracy:.1%}",
'Precision': f"{precision:.1%}",
'Recall': f"{recall:.1%}",
'F1_Score': f"{f1:.1%}",
'Marketing_Efficiency': f"{marketing_efficiency:.1%}",
'Customer_Coverage': f"{customer_coverage:.1%}",
'Predicted_Targets': f"{predicted_loan_customers:,}",
'True_Positives': f"{tp:,}",
'False_Positives': f"{fp:,}",
'Marketing_Cost': f"${marketing_cost:,.0f}",
'Revenue_Generated': f"${revenue_generated:,.0f}",
'Net_Profit': f"${net_profit:,.0f}",
'ROI_Percentage': f"{roi_percentage:.1f}%"
}
return pd.DataFrame([results])
# Function to create detailed confusion matrix with business context
def plot_business_confusion_matrix(model, X_test, y_test, model_name):
"""
Create confusion matrix with business interpretation
"""
predictions = model.predict(X_test)
cm = confusion_matrix(y_test, predictions)
plt.figure(figsize=(10, 8))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
xticklabels=['No Loan', 'Loan Accepted'],
yticklabels=['No Loan', 'Loan Accepted'])
plt.title(f'🎯 Business Impact Confusion Matrix - {model_name}\n' +
'True Positives = Successful Targeting | False Positives = Wasted Marketing',
fontweight='bold', fontsize=12)
plt.xlabel('Predicted')
plt.ylabel('Actual')
# Add business interpretation
tn, fp, fn, tp = cm.ravel()
plt.text(0.5, -0.15, f'💰 Marketing Efficiency: {tp}/{tp+fp} = {tp/(tp+fp)*100:.1f}% of targeted customers accept loans',
transform=plt.gca().transAxes, ha='center', fontweight='bold')
plt.text(0.5, -0.20, f'📈 Customer Coverage: {tp}/{tp+fn} = {tp/(tp+fn)*100:.1f}% of potential customers identified',
transform=plt.gca().transAxes, ha='center', fontweight='bold')
plt.tight_layout()
plt.show()
return cm
print("✅ Enhanced evaluation functions ready!")
print("🎯 Now including business metrics: ROI, Marketing Efficiency, Customer Coverage")
To evaluate the performance of classification models, I use the following key metrics:
Accuracy: Measures the proportion of total correct predictions. Useful when the classes are balanced.
Precision: Indicates how many of the positively predicted cases were actually positive. Important in reducing false positives.
Recall: Reflects how many actual positive cases were correctly predicted. Crucial in minimizing false negatives.
F1 Score: Harmonic mean of precision and recall. It is a balanced measure especially useful when class distribution is imbalanced.
Given that the goal is to predict whether a customer will accept a personal loan offer (which may be an imbalanced target), F1 score and Recall will be especially important to consider alongside Accuracy.
# defining a function to compute different metrics to check performance of a classification model built using sklearn
def model_performance_classification_sklearn(model, predictors, target):
"""
Function to compute different metrics to check classification model performance
model: classifier
predictors: independent variables
target: dependent variable
"""
# predicting using the independent variables
pred = model.predict(predictors)
acc = accuracy_score(target, pred) # to compute Accuracy
recall = recall_score(target, pred) # to compute Recall
precision = precision_score(target, pred) # to compute Precision
f1 = f1_score(target, pred) # to compute F1-score
# creating a dataframe of metrics
df_perf = pd.DataFrame(
{"Accuracy": acc, "Recall": recall, "Precision": precision, "F1": f1,},
index=[0],
)
return df_perf
def confusion_matrix_sklearn(model, predictors, target):
"""
To plot the confusion_matrix with percentages
model: classifier
predictors: independent variables
target: dependent variable
"""
y_pred = model.predict(predictors)
cm = confusion_matrix(target, y_pred)
labels = np.asarray(
[
["{0:0.0f}".format(item) + "\n{0:.2%}".format(item / cm.flatten().sum())]
for item in cm.flatten()
]
).reshape(2, 2)
plt.figure(figsize=(6, 4))
sns.heatmap(cm, annot=labels, fmt="")
plt.ylabel("True label")
plt.xlabel("Predicted label")
model = DecisionTreeClassifier(criterion="gini", random_state=1)
model.fit(X_train, y_train)
confusion_matrix_sklearn(model, X_train, y_train)
decision_tree_perf_train = model_performance_classification_sklearn(
model, X_train, y_train
)
decision_tree_perf_train
feature_names = list(X_train.columns)
print(feature_names)
plt.figure(figsize=(20, 30))
out = tree.plot_tree(
model,
feature_names=feature_names,
filled=True,
fontsize=9,
node_ids=False,
class_names=None,
)
# below code will add arrows to the decision tree split if they are missing
for o in out:
arrow = o.arrow_patch
if arrow is not None:
arrow.set_edgecolor("black")
arrow.set_linewidth(1)
plt.show()
# Text report showing the rules of a decision tree -
print(tree.export_text(model, feature_names=feature_names, show_weights=True))
# importance of features in the tree building ( The importance of a feature is computed as the
# (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance )
print(
pd.DataFrame(
model.feature_importances_, columns=["Imp"], index=X_train.columns
).sort_values(by="Imp", ascending=False)
)
importances = model.feature_importances_
indices = np.argsort(importances)
plt.figure(figsize=(8, 8))
plt.title("Feature Importances")
plt.barh(range(len(indices)), importances[indices], color="violet", align="center")
plt.yticks(range(len(indices)), [feature_names[i] for i in indices])
plt.xlabel("Relative Importance")
plt.show()
confusion_matrix_sklearn(model, X_test, y_test) # Code to create confusion matrix for test data
decision_tree_perf_test = model_performance_classification_sklearn(model, X_test, y_test) # The code to check performance on test data
decision_tree_perf_test
The initial Decision Tree model provided a quick baseline for classification, with moderate accuracy and high interpretability.
However, it showed signs of overfitting — performing better on training data than on test data — due to the model growing too deep without constraints.
Key features like Income, CCAvg, and CD_Account were dominant in the tree structure, aligning with earlier EDA findings.
While the model captured some patterns well, it lacked generalization power, signaling the need for pruning or hyperparameter tuning (e.g., controlling max_depth, min_samples_split, and ccp_alpha).
# Define the parameters of the tree to iterate over
max_depth_values = np.arange(2, 7, 2)
max_leaf_nodes_values = [50, 75, 150, 250]
min_samples_split_values = [10, 30, 50, 70]
# Initialize variables to store the best model and its performance
best_estimator = None
best_score_diff = float('inf')
best_test_score = 0.0
# Iterate over all combinations of the specified parameter values
for max_depth in max_depth_values:
for max_leaf_nodes in max_leaf_nodes_values:
for min_samples_split in min_samples_split_values:
# Initialize the tree with the current set of parameters
estimator = DecisionTreeClassifier(
max_depth=max_depth,
max_leaf_nodes=max_leaf_nodes,
min_samples_split=min_samples_split,
class_weight='balanced',
random_state=42
)
# Fit the model to the training data
estimator.fit(X_train, y_train)
# Make predictions on the training and test sets
y_train_pred = estimator.predict(X_train)
y_test_pred = estimator.predict(X_test)
# Calculate recall scores for training and test sets
train_recall_score = recall_score(y_train, y_train_pred)
test_recall_score = recall_score(y_test, y_test_pred)
# Calculate the absolute difference between training and test recall scores
score_diff = abs(train_recall_score - test_recall_score)
# Update the best estimator and best score if the current one has a smaller score difference
if (score_diff < best_score_diff) & (test_recall_score > best_test_score):
best_score_diff = score_diff
best_test_score = test_recall_score
best_estimator = estimator
# Print the best parameters
print("Best parameters found:")
print(f"Max depth: {best_estimator.max_depth}")
print(f"Max leaf nodes: {best_estimator.max_leaf_nodes}")
print(f"Min samples split: {best_estimator.min_samples_split}")
print(f"Best test recall score: {best_test_score}")
# Fit the best algorithm to the data.
estimator = best_estimator
estimator.fit(X_train, y_train)
Checking performance on training data
confusion_matrix_sklearn(estimator, X_train, y_train)
decision_tree_tune_perf_train = model_performance_classification_sklearn(estimator, X_train, y_train) # The code to check performance on train data
decision_tree_tune_perf_train
#print confusion matrix
confusion_matrix_sklearn(estimator, X_test, y_test)
#print the recall score
decision_tree_tune_perf_test = model_performance_classification_sklearn(estimator, X_test, y_test)
decision_tree_tune_perf_test
Visualizing the Decision Tree
plt.figure(figsize=(10, 10))
out = tree.plot_tree(
estimator,
feature_names=feature_names,
filled=True,
fontsize=9,
node_ids=False,
class_names=None,
)
# below code will add arrows to the decision tree split if they are missing
for o in out:
arrow = o.arrow_patch
if arrow is not None:
arrow.set_edgecolor("black")
arrow.set_linewidth(1)
plt.show()
# Text report showing the rules of a decision tree -
print(tree.export_text(estimator, feature_names=feature_names, show_weights=True))
# importance of features in the tree building ( The importance of a feature is computed as the
# (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance )
print(
pd.DataFrame(
estimator.feature_importances_, columns=["Imp"], index=X_train.columns
).sort_values(by="Imp", ascending=False)
)
importances = estimator.feature_importances_
indices = np.argsort(importances)
plt.figure(figsize=(8, 8))
plt.title("Feature Importances")
plt.barh(range(len(indices)), importances[indices], color="violet", align="center")
plt.yticks(range(len(indices)), [feature_names[i] for i in indices])
plt.xlabel("Relative Importance")
plt.show()
Checking performance on test data
confusion_matrix_sklearn(best_estimator, X_test, y_test) # The code to get the confusion matrix on test data
decision_tree_tune_perf_test = model_performance_classification_sklearn(best_estimator, X_test, y_test) # The code to check performance on test data
decision_tree_tune_perf_test
clf = DecisionTreeClassifier(random_state=1)
path = clf.cost_complexity_pruning_path(X_train, y_train)
ccp_alphas, impurities = path.ccp_alphas, path.impurities
pd.DataFrame(path)
fig, ax = plt.subplots(figsize=(10, 5))
ax.plot(ccp_alphas[:-1], impurities[:-1], marker="o", drawstyle="steps-post")
ax.set_xlabel("effective alpha")
ax.set_ylabel("total impurity of leaves")
ax.set_title("Total Impurity vs effective alpha for training set")
plt.show()
clfs = []
for ccp_alpha in ccp_alphas:
clf = DecisionTreeClassifier(random_state=1, ccp_alpha=ccp_alpha)
clf.fit(X_train, y_train)
clfs.append(clf)
print(
"Number of nodes in the last tree is: {} with ccp_alpha: {}".format(
clfs[-1].tree_.node_count, ccp_alphas[-1]
)
)
clfs = clfs[:-1]
ccp_alphas = ccp_alphas[:-1]
node_counts = [clf.tree_.node_count for clf in clfs]
depth = [clf.tree_.max_depth for clf in clfs]
fig, ax = plt.subplots(2, 1, figsize=(10, 7))
ax[0].plot(ccp_alphas, node_counts, marker="o", drawstyle="steps-post")
ax[0].set_xlabel("alpha")
ax[0].set_ylabel("number of nodes")
ax[0].set_title("Number of nodes vs alpha")
ax[1].plot(ccp_alphas, depth, marker="o", drawstyle="steps-post")
ax[1].set_xlabel("alpha")
ax[1].set_ylabel("depth of tree")
ax[1].set_title("Depth vs alpha")
fig.tight_layout()
Recall vs alpha for training and testing sets
recall_train = []
for clf in clfs:
pred_train = clf.predict(X_train)
values_train = recall_score(y_train, pred_train)
recall_train.append(values_train)
recall_test = []
for clf in clfs:
pred_test = clf.predict(X_test)
values_test = recall_score(y_test, pred_test)
recall_test.append(values_test)
fig, ax = plt.subplots(figsize=(15, 5))
ax.set_xlabel("alpha")
ax.set_ylabel("Recall")
ax.set_title("Recall vs alpha for training and testing sets")
ax.plot(ccp_alphas, recall_train, marker="o", label="train", drawstyle="steps-post")
ax.plot(ccp_alphas, recall_test, marker="o", label="test", drawstyle="steps-post")
ax.legend()
plt.show()
index_best_model = np.argmax(recall_test)
best_model = clfs[index_best_model]
print(best_model)
estimator_2 = DecisionTreeClassifier(
ccp_alpha=0.01, class_weight={0: 0.15, 1: 0.85}, random_state=1
)
estimator_2.fit(X_train, y_train)
Checking performance on training data
confusion_matrix_sklearn(best_estimator, X_train, y_train) # The code to create confusion matrix for train data
decision_tree_tune_post_train = model_performance_classification_sklearn(best_estimator, X_test, y_test) # The code to check performance on train data
decision_tree_tune_post_train
Visualizing the Decision Tree
plt.figure(figsize=(10, 10))
out = tree.plot_tree(
estimator_2,
feature_names=feature_names,
filled=True,
fontsize=9,
node_ids=False,
class_names=None,
)
# below code will add arrows to the decision tree split if they are missing
for o in out:
arrow = o.arrow_patch
if arrow is not None:
arrow.set_edgecolor("black")
arrow.set_linewidth(1)
plt.show()
# Text report showing the rules of a decision tree -
print(tree.export_text(estimator_2, feature_names=feature_names, show_weights=True))
# importance of features in the tree building ( The importance of a feature is computed as the
# (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance )
print(
pd.DataFrame(
estimator_2.feature_importances_, columns=["Imp"], index=X_train.columns
).sort_values(by="Imp", ascending=False)
)
importances = estimator_2.feature_importances_
indices = np.argsort(importances)
plt.figure(figsize=(8, 8))
plt.title("Feature Importances")
plt.barh(range(len(indices)), importances[indices], color="violet", align="center")
plt.yticks(range(len(indices)), [feature_names[i] for i in indices])
plt.xlabel("Relative Importance")
plt.show()
Checking performance on test data
confusion_matrix_sklearn(best_estimator, X_test, y_test) # The code to get the confusion matrix on test data
decision_tree_tune_post_test = model_performance_classification_sklearn(best_estimator, X_test, y_test) ## The code to get the model performance on test data
decision_tree_tune_post_test
Model performance improved significantly through hyperparameter tuning, especially for tree-based models like Decision Trees and Random Forests.
Features such as Income, CCAvg, and CD_Account contributed most strongly to improved classification accuracy.
Using class weights helped address target imbalance, boosting recall for the minority class (loan approvals).
Feature selection and encoding strategies (e.g., dropping Experience, one-hot encoding Education) streamlined the dataset and enhanced model generalization.
The final model shows a balanced trade-off between precision and recall, making it more suitable for real-world loan approval predictions.
# training performance comparison
models_train_comp_df = pd.concat(
[decision_tree_perf_train.T, decision_tree_tune_perf_train.T, decision_tree_tune_post_train.T], axis=1,
)
models_train_comp_df.columns = ["Decision Tree (sklearn default)", "Decision Tree (Pre-Pruning)", "Decision Tree (Post-Pruning)"]
print("Training performance comparison:")
models_train_comp_df
# testing performance comparison
models_test_comp_df = pd.concat(
[decision_tree_perf_test.T, decision_tree_tune_perf_test.T, decision_tree_tune_post_test.T], axis=1,
)
models_test_comp_df.columns = ["Decision Tree (sklearn default)", "Decision Tree (Pre-Pruning)", "Decision Tree (Post-Pruning)"]
print("Test set performance comparison:")
models_test_comp_df
Calculating the financial impact of implementing our machine learning model for targeted marketing campaigns.
# Comprehensive ROI Analysis Function
def calculate_comprehensive_roi(model_precision, model_recall,
baseline_conversion=0.096,
campaign_cost_per_customer=50,
avg_loan_profit=2000,
campaign_size=10000):
"""
Calculate comprehensive business ROI from model implementation
"""
print("=== 💰 COMPREHENSIVE BUSINESS ROI ANALYSIS ===")
print(f"📋 Analysis Parameters:")
print(f" 💵 Campaign cost per customer: ${campaign_cost_per_customer}")
print(f" 💰 Average profit per loan: ${avg_loan_profit:,}")
print(f" 👥 Campaign size: {campaign_size:,} customers")
print(f" 📈 Current baseline conversion: {baseline_conversion:.1%}")
print(f" 🎯 Model precision: {model_precision:.1%}")
print(f" 📊 Model recall: {model_recall:.1%}")
# Define different targeting scenarios
scenarios = {
'Current Strategy (Random)': {
'description': 'Random customer selection',
'conversion_rate': baseline_conversion,
'customers_targeted': campaign_size,
'targeting_efficiency': 1.0
},
'Model-Based (Top 20%)': {
'description': 'Target top 20% probability customers',
'conversion_rate': model_precision * 0.8, # Conservative estimate
'customers_targeted': int(campaign_size * 0.2),
'targeting_efficiency': 0.8
},
'Model-Based (Top 10%)': {
'description': 'Target top 10% probability customers',
'conversion_rate': model_precision * 0.9, # Higher precision with smaller target
'customers_targeted': int(campaign_size * 0.1),
'targeting_efficiency': 0.9
},
'Hybrid Approach (Top 30%)': {
'description': 'Balanced approach - top 30% with personalized offers',
'conversion_rate': model_precision * 0.7,
'customers_targeted': int(campaign_size * 0.3),
'targeting_efficiency': 0.7
}
}
roi_results = []
for scenario_name, params in scenarios.items():
customers_targeted = params['customers_targeted']
conversion_rate = params['conversion_rate']
# Calculate financial metrics
conversions = customers_targeted * conversion_rate
revenue = conversions * avg_loan_profit
cost = customers_targeted * campaign_cost_per_customer
profit = revenue - cost
roi_percentage = (profit / cost * 100) if cost > 0 else 0
# Cost per acquisition
cost_per_acquisition = cost / conversions if conversions > 0 else float('inf')
# Efficiency metrics
customers_saved = campaign_size - customers_targeted
cost_savings = customers_saved * campaign_cost_per_customer
roi_results.append({
'Scenario': scenario_name,
'Strategy': params['description'],
'Customers_Targeted': f"{customers_targeted:,}",
'Conversion_Rate': f"{conversion_rate:.1%}",
'Expected_Conversions': f"{conversions:.0f}",
'Revenue': f"${revenue:,.0f}",
'Marketing_Cost': f"${cost:,.0f}",
'Net_Profit': f"${profit:,.0f}",
'ROI': f"{roi_percentage:.1f}%",
'Cost_per_Acquisition': f"${cost_per_acquisition:.0f}" if cost_per_acquisition != float('inf') else "N/A",
'Cost_Savings': f"${cost_savings:,.0f}"
})
roi_df = pd.DataFrame(roi_results)
print("\n=== 📊 ROI COMPARISON TABLE ===")
display(roi_df)
# Calculate improvements over baseline
baseline_roi = float(roi_df.iloc[0]['ROI'].replace('%', ''))
baseline_profit = float(roi_df.iloc[0]['Net_Profit'].replace('$', '').replace(',', ''))
print(f"\n=== 🚀 KEY BUSINESS IMPROVEMENTS ===")
for i in range(1, len(roi_df)):
scenario = roi_df.iloc[i]['Scenario']
model_roi = float(roi_df.iloc[i]['ROI'].replace('%', ''))
model_profit = float(roi_df.iloc[i]['Net_Profit'].replace('$', '').replace(',', ''))
roi_improvement = model_roi - baseline_roi
profit_improvement = model_profit - baseline_profit
profit_increase_pct = (profit_improvement / baseline_profit * 100) if baseline_profit > 0 else 0
print(f"\n🎯 {scenario}:")
print(f" 📈 ROI Improvement: +{roi_improvement:.1f} percentage points")
print(f" 💰 Additional Profit: ${profit_improvement:,.0f} per campaign")
print(f" 📊 Profit Increase: {profit_increase_pct:.1f}%")
# Annual projections
annual_campaigns = 4
best_scenario_idx = roi_df['ROI'].str.replace('%', '').astype(float).idxmax()
best_scenario = roi_df.iloc[best_scenario_idx]
best_profit = float(best_scenario['Net_Profit'].replace('$', '').replace(',', ''))
annual_profit_improvement = (best_profit - baseline_profit) * annual_campaigns
print(f"\n=== 📅 ANNUAL PROJECTIONS ===")
print(f"🏆 Best Strategy: {best_scenario['Scenario']}")
print(f"📅 Campaigns per year: {annual_campaigns}")
print(f"💎 Annual additional profit: ${annual_profit_improvement:,.0f}")
print(f"🎯 Total annual profit potential: ${best_profit * annual_campaigns:,.0f}")
return roi_df
# Example calculation (replace with actual model metrics)
# Assuming best model achieved 92.7% precision and 93.3% recall
model_precision = 0.927 # Replace with actual best model precision
model_recall = 0.933 # Replace with actual best model recall
print("🎯 Calculating ROI based on model performance...")
roi_analysis = calculate_comprehensive_roi(model_precision, model_recall)
# Create ROI Visualization Dashboard
def create_roi_dashboard(roi_df):
"""
Create comprehensive ROI visualization dashboard
"""
fig, axes = plt.subplots(2, 2, figsize=(16, 12))
fig.suptitle('💰 Business ROI Analysis Dashboard', fontsize=16, fontweight='bold')
# Extract numeric values for plotting
scenarios = roi_df['Scenario'].tolist()
roi_values = [float(x.replace('%', '')) for x in roi_df['ROI']]
profit_values = [float(x.replace('$', '').replace(',', '')) for x in roi_df['Net_Profit']]
cost_values = [float(x.replace('$', '').replace(',', '')) for x in roi_df['Marketing_Cost']]
conversions = [float(x.replace(',', '')) for x in roi_df['Expected_Conversions']]
# 1. ROI Comparison
colors = ['red' if i == 0 else 'green' for i in range(len(scenarios))]
bars1 = axes[0,0].bar(range(len(scenarios)), roi_values, color=colors, alpha=0.7)
axes[0,0].set_title('📊 ROI Comparison by Strategy', fontweight='bold')
axes[0,0].set_ylabel('ROI (%)')
axes[0,0].set_xticks(range(len(scenarios)))
axes[0,0].set_xticklabels([s.replace(' ', '\n') for s in scenarios], rotation=0, fontsize=9)
# Add value labels
for i, bar in enumerate(bars1):
height = bar.get_height()
axes[0,0].text(bar.get_x() + bar.get_width()/2., height + 5,
f'{height:.1f}%', ha='center', va='bottom', fontweight='bold')
# 2. Profit Comparison
bars2 = axes[0,1].bar(range(len(scenarios)), profit_values, color=colors, alpha=0.7)
axes[0,1].set_title('💰 Net Profit Comparison', fontweight='bold')
axes[0,1].set_ylabel('Net Profit ($)')
axes[0,1].set_xticks(range(len(scenarios)))
axes[0,1].set_xticklabels([s.replace(' ', '\n') for s in scenarios], rotation=0, fontsize=9)
# Add value labels
for i, bar in enumerate(bars2):
height = bar.get_height()
axes[0,1].text(bar.get_x() + bar.get_width()/2., height + max(profit_values)*0.02,
f'${height:,.0f}', ha='center', va='bottom', fontweight='bold', fontsize=8)
# 3. Cost vs Revenue
x_pos = np.arange(len(scenarios))
width = 0.35
revenue_values = [p + c for p, c in zip(profit_values, cost_values)]
bars3a = axes[1,0].bar(x_pos - width/2, cost_values, width, label='Marketing Cost', color='lightcoral', alpha=0.7)
bars3b = axes[1,0].bar(x_pos + width/2, revenue_values, width, label='Revenue', color='lightgreen', alpha=0.7)
axes[1,0].set_title('💵 Cost vs Revenue Analysis', fontweight='bold')
axes[1,0].set_ylabel('Amount ($)')
axes[1,0].set_xticks(x_pos)
axes[1,0].set_xticklabels([s.replace(' ', '\n') for s in scenarios], rotation=0, fontsize=9)
axes[1,0].legend()
# 4. Conversion Efficiency
bars4 = axes[1,1].bar(range(len(scenarios)), conversions, color=colors, alpha=0.7)
axes[1,1].set_title('🎯 Expected Conversions by Strategy', fontweight='bold')
axes[1,1].set_ylabel('Number of Conversions')
axes[1,1].set_xticks(range(len(scenarios)))
axes[1,1].set_xticklabels([s.replace(' ', '\n') for s in scenarios], rotation=0, fontsize=9)
# Add value labels
for i, bar in enumerate(bars4):
height = bar.get_height()
axes[1,1].text(bar.get_x() + bar.get_width()/2., height + max(conversions)*0.02,
f'{height:.0f}', ha='center', va='bottom', fontweight='bold')
plt.tight_layout()
plt.show()
# Create ROI dashboard
create_roi_dashboard(roi_analysis)
Based on the analysis and predictive modeling, the following recommendations are suggested to support the bank’s marketing and customer targeting strategies:
Target High-Income Segments
Customers with higher income and higher average monthly credit card spending (CCAvg) show a greater likelihood of accepting personal loans.
The bank should prioritize loan marketing to these customers, as they are more likely to convert and also pose lower credit risk.
Focus on Educated Customers
Education level is positively correlated with loan acceptance. Customers with graduate or advanced degrees are significantly more likely to accept loan offers.
Consider designing custom loan products or messaging tailored to this educated segment.
Leverage Cross-Selling Opportunities
Ownership of a CD account is a strong predictor of loan uptake. Customers with CD or securities accounts may already have trust in the bank and can be cross-sold personal loans effectively.
Integrate cross-promotions in banking dashboards or during CD maturity cycles.
Optimize Marketing by Age Group
Customers aged 30 to 40 years exhibit the highest likelihood of accepting personal loans.
Age-specific campaigns should be developed targeting this group with offers tied to common financial goals (e.g., home renovation, family planning, etc.).
Address Class Imbalance in Future Models
The current dataset shows an imbalance with relatively few positive loan takers.
To better understand loan behavior, the bank should consider collecting more positive case data or exploring resampling techniques in future analyses.
De-emphasize ZIP Code
ZIP code, although initially included, had high cardinality with low predictive value and may unnecessarily complicate modeling.
It can be excluded unless linked to specific regional business rules.